No free pass for internet platforms on child safety, Starmer says

BBC News

No online platform will get a free pass on children's safety on the internet in new plans, Prime Minister Sir Keir Starmer has said. The government is pledging to close loopholes in existing laws designed to protect children online and will consult on a social media ban for under-16s as part of plans for online safety. There are also plans to introduce powers to speedily change the law in response to developing online behaviours, and to update legislation to preserve children's social media and online data - as campaigned for by the group Jools' Law. Opponents accused the government of inaction, and have called for Parliament to be given a vote on the social media ban for children. The government had already said it would launch the public consultation in March, seeking opinions about restricting children's access to AI chatbots and limiting infinite scrolling features for children - also known as doomscrolling.


'Big, beautiful' bill could give a free pass for Big Tech to kill jobs

FOX News

Gladstone A.I. co-founders and CEOs Edouard Harris and Jeremie Harris explain the major role that A.I. will play in national security and warfare on 'The Will Cain Show.' Buried in the budget reconciliation package recently passed by the House is a moratorium that would block every U.S. state from passing laws on artificial intelligence or automation for the next decade. Why would lawmakers try to sneak a 10-year ban on AI regulation into a budget bill? The draft moratorium is the result of aggressive lobbying from companies that are already using AI to undermine workers and eliminate jobs. To understand what a ban on AI regulation could mean, ask yourself: what are lawmakers talking about when they talk about "AI?" Most of us imagine programs like ChatGPT churning out text and images. But Big Tech sees something else: disruption, control, profit. They want driverless trucks crisscrossing our roads without oversight.


Kids' Cartoons Get a Free Pass From YouTube's Deepfake Disclosure Rules

WIRED

YouTube has updated its rulebook for the era of deepfakes. Starting today, anyone uploading video to the platform must disclose certain uses of synthetic media, including generative AI, so viewers know what they're seeing isn't real. YouTube says it applies to "realistic" altered media such as "making it appear as if a real building caught fire" or swapping "the face of one individual with another's." The new policy shows YouTube taking steps that could help curb the spread of AI-generated misinformation as the US presidential election approaches. It is also striking for what it permits: AI-generated animations aimed at kids are not subject to the new synthetic content disclosure rules.


Joy Buolamwini: "We're giving AI companies a free pass"

MIT Technology Review

I can tell Buolamwini finds the cover amusing. She takes a picture of it. Times have changed a lot since 1961. In her new memoir, Unmasking AI: My Mission to Protect What Is Human in a World of Machines, Buolamwini shares her life story. In many ways she embodies how far tech has come since then, and how much further it still needs to go.


AI gets a free pass for another year as EU and US stall on regulation

#artificialintelligence

UK and EU governments are throwing themselves on the proverbial tram track that is AI ethical standards. The European Commission (EC) has already drafted some laws to regulate the use of AI, but reports suggest it'll take up to a year to actually get them in place. Right now, we're amid the crossfire in the AI badlands. The law is seemingly being pushed aside while new AI applications are established all over, wholly unregulated. According to Reuters (via AI News), two lawmakers involved with the EU's proceedings said the debate is tied up on whether facial recognition should be banned, and over who has the right to preside over the rules and keep the AI in check.

It's a similar situation in the US, where there is still no federal regulation of artificial intelligence, but there is reportedly some US AI regulation "on the horizon." It will apparently take a different form, however, with the detailed framework the EC has proposed exchanged for an agency-by-agency approach.

The previous draft from the European Commission established some classifications for AI, depending on the level of risk that each system might pose to us as a species. These range from 'limited risk systems' such as chatbots and spam filters, right up to those of 'unacceptable risk' - i.e. anything exploitative, manipulative, or that might "conduct real-time biometric authentication in public spaces for law enforcement." That all sounds very Orwellian, but when we've got DeepMind training AIs to control nuclear fusion, you'd think facial recognition would be the least of our worries.

'High risk' AI systems will be required to undergo heavy vetting, and be kept on some tight reins in order to operate within the law. Regulations could include anything from human oversight, to mandatory risk management systems, or government registration. Any system deemed high risk will likely require some intense record keeping and logging, in case anything goes awry, and potentially full disclosure of such records to users for transparency. It seems at least that video game AI is poised for inclusion in the limited risk category, but who knows whether that will get bumped up a rung once everyone bails on reality and makes the exodus into the metaverse.